4 research outputs found

    Methods to detect and reduce driver stress: a review

    Get PDF
    Automobiles are the most common mode of transportation in urban areas. An alert mind is a prerequisite for safe driving; driver stress can lead to faulty decision-making and cause severe injuries. Therefore, numerous techniques and systems have been proposed and implemented to subdue negative emotions and improve the driving experience. Studies show that factors such as road conditions, the state of the vehicle, the weather, the driver’s personality, and the presence of passengers can affect driver stress, and all of these factors significantly influence a driver’s attention. This paper presents a detailed review of techniques proposed to reduce driver stress and to aid recovery from it. These technologies can be divided into three categories: notification alert, driver assistance systems, and environmental soothing. Notification alert systems enhance the driving experience by strengthening the driver’s awareness of his or her physiological condition, thereby helping to avoid accidents. Driver assistance systems guide the driver through difficult driving circumstances. Environmental soothing techniques relieve driver stress caused by changes in the environment. Furthermore, driving maneuvers, driver stress detection, and the factors contributing to driver stress are discussed and reviewed to facilitate a better understanding of the topic.
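
    The notification-alert category described above amounts to monitoring a physiological signal and warning the driver when it drifts away from a resting norm. As a minimal sketch of that idea only, and not of any specific system from this review, the following hypothetical Python rule flags a rolling-mean heart rate that rises well above a resting baseline; the function name, thresholds, and window size are all illustrative assumptions.

```python
# Minimal sketch of a "notification alert" rule: flag elevated heart rate
# relative to a resting baseline. Thresholds and window size are illustrative
# assumptions, not values taken from the reviewed studies.
from collections import deque

def stress_alert(hr_samples, baseline_hr=70.0, ratio=1.25, window=10):
    """Yield True for each sample once the rolling-mean heart rate
    exceeds `ratio` times the resting baseline."""
    recent = deque(maxlen=window)
    for hr in hr_samples:
        recent.append(hr)
        yield (sum(recent) / len(recent)) > ratio * baseline_hr

# Example: a simulated drive in which heart rate ramps up under stress.
readings = [72, 74, 75, 80, 88, 95, 101, 108, 112, 115]
for t, alert in enumerate(stress_alert(readings)):
    if alert:
        print(f"t={t}: elevated stress - notify driver")
```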

    Sensor fusion of motion-based sign language interpretation with deep learning

    Get PDF
    Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which creates a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce this barrier. However, the CV approach is restricted by the camera’s viewing angle and is highly affected by environmental factors. In addition, CV usually involves machine learning, which requires a team of experts and high-cost hardware, increasing the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, applying sensor fusion that “fuses” six inertial measurement units (IMUs). The IMUs are attached to all five fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by a field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution that helps hearing-impaired people communicate with others and improves their quality of life.
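
    The abstract specifies six IMUs and deep learning but not the network itself, so the sketch below shows only one plausible shape: a recurrent classifier over the concatenated IMU channels. The channel layout (six channels per IMU), the LSTM architecture, and the class count are assumptions made for illustration, not the paper’s reported design.

```python
# Minimal sketch of a gesture classifier over fused IMU streams.
# Assumptions (not from the paper): each of the 6 IMUs contributes
# 6 channels (3-axis accelerometer + 3-axis gyroscope), fused by
# concatenation into 36 features per time step and fed to an LSTM.
import torch
import torch.nn as nn

NUM_IMUS, CHANNELS_PER_IMU = 6, 6       # five fingertips + back of hand
FEATURES = NUM_IMUS * CHANNELS_PER_IMU  # 36 fused channels per step
NUM_GESTURES = 26                       # e.g. a dynamic ASL alphabet

class FusedIMUClassifier(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, NUM_GESTURES)

    def forward(self, x):                # x: (batch, time, 36)
        _, (h_n, _) = self.lstm(x)       # last hidden state summarizes the gesture
        return self.head(h_n[-1])        # (batch, NUM_GESTURES) logits

# Smoke test on random data shaped like a batch of 100-step gestures.
model = FusedIMUClassifier()
logits = model(torch.randn(8, 100, FEATURES))
print(logits.shape)  # torch.Size([8, 26])
```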

    American Sign Language Recognition Using Leap Motion Controller with Machine Learning Approach

    No full text
    Sign language is intentionally designed to allow deaf and mute communities to convey messages and to connect with society. Unfortunately, learning and practicing sign language is not common in society; hence, this study developed a sign language recognition prototype using the Leap Motion Controller (LMC). Many existing studies have proposed methods for recognizing only a subset of sign language, whereas this study aimed for full American Sign Language (ASL) recognition, covering 26 letters and 10 digits. Most ASL letters are static (no movement), but certain letters are dynamic (they require specific movements). Thus, this study also aimed to extract features from finger and hand motions to differentiate between static and dynamic gestures. The experimental results revealed that the recognition rates for the 26 letters using a support vector machine (SVM) and a deep neural network (DNN) are 80.30% and 93.81%, respectively. Meanwhile, the recognition rates for the combination of 26 letters and 10 digits are slightly lower: approximately 72.79% for the SVM and 88.79% for the DNN. As a result, the sign language recognition system has great potential for narrowing the gap between deaf and mute communities and others. The proposed prototype could also serve as an interpreter for everyday life in service sectors, such as at the bank or post office.
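
    To make the SVM branch concrete, here is a minimal, hypothetical sketch: a scikit-learn pipeline classifying fixed-length feature vectors of the kind an LMC frame could provide. The feature layout (five fingertip positions relative to the palm) and all hyperparameters are illustrative assumptions, and random placeholder data stands in for real recordings, so the printed accuracy is near chance.

```python
# Minimal sketch of the SVM branch: classify fixed-length feature vectors
# extracted from Leap Motion Controller hand frames. The feature layout
# (here, 5 fingertip positions relative to the palm = 15 values) is an
# illustrative assumption, not the paper's exact feature set.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_FEATURES, N_CLASSES = 15, 26          # 26 ASL letters

# Placeholder data standing in for real LMC recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(2600, N_FEATURES))
y = rng.integers(0, N_CLASSES, size=2600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")  # ~chance on random data
```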

    A Comparative of Two-Dimensional Statistical Moment Invariants Features in Formulating an Automated Probabilistic Machine Learning Identification Algorithm for Forensic Application

    Get PDF
    IBIS, ALIS, EVOFINDER, and CONDOR are the massive computerised ballistics systems typically utilised in forensic laboratories to automatically locate similarities between images of cartridge cases and bullets. However, these market-available technologies impose long execution times and require physical interpretation to consolidate the analysis results when performing ballistics matching tasks. Therefore, the principal objective of this study is to propose an improved automated probabilistic machine learning identification algorithm by extracting two-dimensional (2D) statistical moment invariants from the segmented region of interest (ROI) of cartridge case and bullet images. To pursue this objective, several 2D statistical moment invariants were compared and tested to determine the most suitable feature set for the proposed identification algorithm. The 2D statistical moment invariants employed include Orthogonal Legendre moments (OLM), Hu moments (HM), Tsirikolias-Mertzois moments (TMM), Pan-Keane moments (PKM), and Central Geometric moments (CGM). Moreover, the proposed identification algorithm was also tested in different scenarios, including classification by the strength of association between the extracted feature sets. The empirical results revealed that the identification algorithm applied with the CGM under the weak-association classification yielded the best identification accuracy rates, above 96.5% across all training-set sample sizes. These results also suggest that the proposed identification algorithm could be developed into a mobile application for ballistics identification, significantly reducing the time taken to perform ballistics identification tasks.
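
    Of the moment families the paper compares, Hu moments are directly available in OpenCV, so the sketch below shows only that extraction step, under assumptions the abstract does not specify: Otsu thresholding as a stand-in for segmentation, log-scaling of the raw invariants, and a synthetic filled circle in place of a real cartridge-case ROI. The downstream probabilistic classifier is omitted.

```python
# Minimal sketch of extracting 2D moment-invariant features from a
# segmented ROI, using Hu moments (HM) as one of the compared sets.
# The segmentation step and the test image are assumptions; only the
# moment computation itself mirrors standard practice.
import cv2
import numpy as np

def hu_features(roi_gray: np.ndarray) -> np.ndarray:
    """Binarize a grayscale ROI and return its 7 log-scaled Hu invariants."""
    _, binary = cv2.threshold(roi_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Log-scale to tame the wide dynamic range of the raw invariants.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Example on a synthetic stand-in for a segmented impression: a filled circle.
roi = np.zeros((128, 128), dtype=np.uint8)
cv2.circle(roi, (64, 64), 30, 255, -1)
print(hu_features(roi))  # one 7-dimensional feature vector per image
```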